Double Targeted Universal Adversarial Perturbations

Authors

Abstract

Despite their impressive performance, deep neural networks (DNNs) are widely known to be vulnerable to adversarial attacks, which makes it challenging to deploy them in security-sensitive applications, such as autonomous driving. Image-dependent perturbations can fool a network for one specific image, while universal adversarial perturbations are capable of fooling a network for samples from all classes without selection. We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative image-dependent perturbations and generic universal perturbations. This universal perturbation attacks one targeted source class toward a sink class, while having only a limited effect on other non-targeted classes, avoiding raising suspicion. Targeting the source and sink classes simultaneously, we term this attack the double targeted attack (DTA). It provides an attacker with the freedom to perform precise attacks on a DNN model while raising little suspicion. We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack (Code: https://github.com/phibenz/double-targeted-uap.pytorch).
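The idea described above can be sketched as a simple optimization loop: a single shared perturbation is trained to push source-class samples toward the sink class while a second loss term keeps predictions on non-targeted classes intact. The sketch below is illustrative only; the function name, the `lam` trade-off weight, the epsilon budget, and the plain cross-entropy preservation loss are assumptions, not the exact formulation of the paper.

```python
# Illustrative sketch of one double-targeted UAP optimization step.
# Assumptions (not from the paper): loss weighting `lam`, L-inf budget
# `eps`, and cross-entropy as the class-preservation loss.
import torch
import torch.nn.functional as F

def dta_step(model, delta, x_source, x_other, y_other, sink_class,
             optimizer, eps=10 / 255, lam=1.0):
    """One step of optimizing a shared perturbation `delta`.

    x_source: batch of images from the targeted source class
    x_other:  batch from non-targeted classes, with true labels y_other
    sink_class: the class index the source samples should be pushed to
    """
    optimizer.zero_grad()
    # Attack term: perturbed source-class samples should be classified
    # as the sink class.
    logits_src = model(x_source + delta)
    sink = torch.full((x_source.size(0),), sink_class, dtype=torch.long)
    loss_attack = F.cross_entropy(logits_src, sink)
    # Preservation term: non-targeted classes should keep their labels,
    # so the perturbation raises little suspicion.
    logits_other = model(x_other + delta)
    loss_preserve = F.cross_entropy(logits_other, y_other)
    loss = loss_attack + lam * loss_preserve
    loss.backward()
    optimizer.step()
    # Project back onto the L-infinity ball of radius eps.
    with torch.no_grad():
        delta.clamp_(-eps, eps)
    return loss.item()
```

In practice this step would be repeated over many mini-batches drawn from the source and non-targeted classes, with the same `delta` reused across all samples, which is what makes the perturbation universal.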


Similar Articles

Defense against Universal Adversarial Perturbations

Recent advances in Deep Learning show the existence of image-agnostic quasi-imperceptible perturbations that when applied to ‘any’ image can fool a state-of-the-art network classifier to change its prediction about the image label. These ‘Universal Adversarial Perturbations’ pose a serious threat to the success of Deep Learning in practice. We present the first dedicated framework to effectivel...


Analysis of universal adversarial perturbations

Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we propose a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and ...


Learning Universal Adversarial Perturbations with Generative Models

Neural networks are known to be vulnerable to adversarial examples, inputs that have been intentionally perturbed to remain visually similar to the source input, but cause a misclassification. It was recently shown that given a dataset and classifier, there exists so called universal adversarial perturbations, a single perturbation that causes a misclassification when applied to any input. In t...


Art of singular vectors and universal adversarial perturbations

Vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has been attracting a lot of attention in recent studies. It has been shown that for many state of the art DNNs performing image classification there exist universal adversarial perturbations — image-agnostic perturbations mere addition of which to natural images with high probability leads to their misclassification. In this w...


Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations

Machine learning models are susceptible to adversarial perturbations: small changes to input that can cause large changes in output. It is also demonstrated that there exist input-agnostic perturbations, called universal adversarial perturbations, which can change the inference of target model on most of the data samples. However, existing methods to craft universal perturbations are (i) task s...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2021

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-030-69538-5_18